    Personalized content retrieval in context using ontological knowledge

    Personalized content retrieval aims at improving the retrieval process by taking into account the particular interests of individual users. However, not all user preferences are relevant in all situations. It is well known that human preferences are complex, multiple, heterogeneous, changing, even contradictory, and should be understood in the context of the user's goals and tasks at hand. In this paper, we propose a method to build a dynamic representation of the semantic context of ongoing retrieval tasks, which is used to activate different subsets of user interests at runtime, so that out-of-context preferences are discarded. Our approach is based on an ontology-driven representation of the domain of discourse, providing enriched descriptions of the semantics involved in retrieval actions and preferences, and enabling the definition of effective means to relate preferences and context.
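    The core mechanism described here, activating only those preferences that are semantically related to the live context, can be sketched roughly as follows. This is a minimal illustration under assumed data structures (a toy child-to-parent concept ontology and weighted preferences); names are hypothetical and do not reflect the authors' actual implementation.

    ```python
    # Toy ontology: child concept -> parent concept (hypothetical example data).
    ONTOLOGY = {
        "jazz": "music",
        "rock": "music",
        "tennis": "sport",
    }

    def related(concept, context_concepts):
        """A concept is in context if it, or any ancestor, appears in the context."""
        while concept is not None:
            if concept in context_concepts:
                return True
            concept = ONTOLOGY.get(concept)  # walk up to the parent concept
        return False

    def contextualize(preferences, context_concepts):
        """Keep only the preference weights whose concept is related to the context."""
        return {c: w for c, w in preferences.items() if related(c, context_concepts)}

    # If the ongoing task is about music, sport preferences are discarded.
    prefs = {"jazz": 0.9, "tennis": 0.7, "rock": 0.4}
    active = contextualize(prefs, {"music"})
    ```

    In practice the relatedness test would use richer ontology relations and graded semantic similarity rather than a simple ancestor walk, but the runtime filtering step has this shape.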

    Personalized information retrieval based on context and ontological knowledge

    The article has been accepted for publication and appeared in a revised form, subsequent to peer review and/or editorial input, by Cambridge University Press. Extended papers from C&O-2006, the Second International Workshop on Contexts and Ontologies: Theory, Practice and Applications, collocated with the Seventeenth European Conference on Artificial Intelligence (ECAI).

    Context modeling has long been acknowledged as a key aspect in a wide variety of problem domains. In this paper we focus on the combination of contextualization and personalization methods to improve the performance of personalized information retrieval. The key aspects of our proposed approach are: a) the explicit distinction between historic user context and live user context; b) the use of ontology-driven representations of the domain of discourse as a common, enriched representational ground for content meaning, user interests, and contextual conditions, enabling the definition of effective means to relate the three; and c) the introduction of fuzzy representations as an instrument to properly handle the uncertainty and imprecision involved in the automatic interpretation of meanings, user attention, and user wishes. Based on a formal grounding at the representational level, we propose methods for the automatic extraction of persistent semantic user preferences and live, ad hoc user interests, which are combined to improve the accuracy and reliability of personalization for retrieval.

    This research was partially supported by the European Commission under contracts FP6-001765 aceMedia and FP6-027685 MESH. The expressed content is the view of the authors but not necessarily the view of the aceMedia or MESH projects as a whole.
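    The combination of persistent and live interests can be pictured as a weighted fusion of two fuzzy membership maps over concepts. The sketch below is a simplified assumption of one plausible combination operator (a clipped convex combination); the function name, the parameter `alpha`, and the operator itself are illustrative, not the paper's actual formulation.

    ```python
    def combine(persistent, live, alpha=0.5):
        """Fuse persistent and live interest weights (each a concept -> [0, 1] map).

        alpha balances the long-term profile against the live session context;
        the result is clipped to stay a valid fuzzy membership degree.
        """
        concepts = set(persistent) | set(live)
        return {
            c: min(1.0, alpha * persistent.get(c, 0.0) + (1 - alpha) * live.get(c, 0.0))
            for c in concepts
        }
    ```

    A concept present only in the live context still receives a nonzero combined weight, which is what lets ad hoc session interests influence ranking alongside the stored profile.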

    Exploring the adoption of physical security controls in smartphones

    The proliferation of smartphones has changed our lives through the enhanced connectivity, increased storage capacity, and innovative functionality they offer. Their popularity has drawn the attention of attackers, and as a result their users are now exposed to many security and privacy threats. The fact that smartphones store significant data (e.g. personal, business, government) in combination with their mobility increases the impact of unauthorized physical access to them. However, past research has revealed that this is not clearly understood by smartphone users, who disregard the available security controls. In this context, this paper explores attitudes and perceptions towards security controls that protect smartphone users' data from unauthorized physical access. We conducted a survey to measure their adoption and the reasons behind users' selections. Our results suggest that users nowadays are more concerned about their physical security, but also reveal that a considerable portion of our sample remains prone to unauthorized physical access.

    Placing User-Generated Photo Metadata on a Map


    Collaborative Gaze Channelling for Improved Cooperation During Robotic Assisted Surgery

    The use of multiple robots for performing complex tasks is becoming common practice in many robot applications. When different operators are involved, effective cooperation with anticipated manoeuvres is important for seamless, synergistic control of all the end-effectors. In this paper, the concept of Collaborative Gaze Channelling (CGC) is presented for improved control of surgical robots in a shared task. Through eye tracking, the fixations of each operator are monitored and presented in a shared surgical workspace. CGC permits remote or physically separated collaborators to share their intention by visualising the eye gaze of their counterparts, and thus recovers, to a certain extent, the information of mutual intent that we rely upon in a vis-à-vis working setting. In this study, the efficiency of surgical manipulation with and without CGC for controlling a pair of bimanual surgical robots is evaluated by analysing the level of coordination of two independent operators. Fitts' law is used to compare the quality of movement with and without CGC. A total of 40 subjects were recruited for this study, and the results show that the proposed CGC framework exhibits significant improvement (p<0.05) on all the motion indices used for quality assessment. This study demonstrates that visual guidance is an implicit yet effective way of communication during collaborative tasks in robotic surgery. Detailed experimental validation results demonstrate the potential clinical value of the proposed CGC framework. © 2012 Biomedical Engineering Society.
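    Fitts' law relates movement time to a task's index of difficulty, and movement quality can be compared as throughput (bits per second). A minimal sketch of the standard Shannon formulation, for illustration only (the paper's exact motion indices are not specified here):

    ```python
    import math

    def index_of_difficulty(distance, width):
        """Shannon formulation of Fitts' index of difficulty, in bits:
        ID = log2(D / W + 1), where D is movement distance and W is target width."""
        return math.log2(distance / width + 1.0)

    def throughput(distance, width, movement_time):
        """Throughput in bits/s: ID divided by the observed movement time.
        Higher throughput indicates faster, better-coordinated motion."""
        return index_of_difficulty(distance, width) / movement_time
    ```

    Comparing mean throughput across the with-CGC and without-CGC conditions is one standard way such a framework's effect on movement quality would be quantified.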